The dream of long-lasting software architecture

Three principles for stable and intuitive code

Christian Dangl

Long-lasting software is probably every developer's dream, yet it is rarely the reality. With three simple methods, it is possible to create an architecture that keeps a stable, clear line throughout the project – and if something does go wrong, the mistake is noticed immediately. This article shows what these methods are.

Everyone knows this scenario: You start a new project with high ambitions and expectations. This time everything will be better, more beautiful, faster, simpler, and at the same time more intuitive in terms of software architecture. Most of the time this works well at the beginning, but eventually there comes a point where the pressure to finish grows and grows. In the case of bugs, more emphasis is placed on applying a band-aid than on healing the actual wound. And so, step by step, the software becomes more intertwined, more complex, and more difficult to maintain. But what the heck, the next project will be better… right? Anyone who is honest with themselves will have experienced this problem more than once in their everyday work or in project teams. But how can future problems be addressed during the development phase, and how can the foundations be laid now to solve them with less effort? You can start in the early project phase with the use of immutability, which is also the first of the three methods that we will take a closer look at in this article.

Immutability

Avoiding side effects starts at the lowest level – with the immutability approach. Immutability means that objects should no longer change with regard to their properties. This approach is by no means untypical for our world. We should all be aware that, for the most part, we are not inventing anything new in the field of programming, but are often merely reflecting our real world as a digital image. Now, when modeling a Person class based on our real world, does it make sense to provide a setBirthday? Or should the birthday only be set once in the constructor? Probably the latter! It simply doesn't make sense for the same person to suddenly have a different date of birth. The same could be applied to many other properties of our person (place of birth, origin, etc.). Could this be a sign that most properties should be passed once in the constructor and that our person needs almost no setters at all?
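As a minimal sketch (the class and its single property are, of course, hypothetical), such an immutable person could look like this:

class Person
{
  private DateTimeImmutable $birthday;

  public function __construct(DateTimeImmutable $birthday)
  {
    # the birthday is part of the person's identity and is set exactly once
    $this->birthday = $birthday;
  }

  public function getBirthday(): DateTimeImmutable
  {
    # read access is fine - there is simply no setBirthday()
    return $this->birthday;
  }
}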

This does not mean that you should no longer create setters at all, but in the long run you will be thankful to do without as many of them as possible and to set the greater part of the properties via the constructor. But what do you do if you actually want to change one of these non-changeable properties? For this we move away from our example person and look at two simple options with a more abstract example: Either our requirement is a property that can be changed in a controlled way – a hash value, perhaps – in which case there could be a function regenerateHash() that does this cleanly according to the OOP approach. Or you have to be honest enough to recognize that if this property changes, the object has a new identity, and a new object must therefore be instantiated. Re-creating objects with a multitude of constructor arguments often seems especially tedious and quickly tempts us to reintroduce setters. However, you might also ask the simple question: "If I create the identity of my object in a defined way, why would I need to change and recreate it over and over again to reach my goal? What am I doing wrong in my modeling?"
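For the first option, a sketch could look like this – assuming a hypothetical ApiToken class whose hash may only change in one controlled, clearly named way:

class ApiToken
{
  private string $hash;

  public function __construct()
  {
    $this->regenerateHash();
  }

  # the only sanctioned way to change the hash - no generic setHash()
  public function regenerateHash(): void
  {
    $this->hash = bin2hex(random_bytes(16));
  }

  public function getHash(): string
  {
    return $this->hash;
  }
}

For the second option there is nothing to sketch: instead of mutating the existing object, we simply instantiate a new one with the new constructor arguments.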

And with that, it becomes clear that developers sometimes like to take the easier path and quickly lose sight of the desired architecture's big picture. A person with a date of birth is not that far-fetched, but what do these theoretical approaches mean for a "real" application? Let's assume we have a simple HTTP client. It is correctly configured via a factory with regard to connection timeout, exception handling, and base URL. Now we use this exact instance in our somewhat long and suboptimal flow logic – many if statements and loops across a few hundred lines of code, i.e. nothing by the textbook, but something that nevertheless occurs in reality. Now, it can happen that the upper part of an algorithm should work with a request timeout of at most 20 seconds, while the code further down must be reduced to a maximum of 10 seconds and uses the corresponding timeout setter. If a big loop is then put around both parts, you quickly find yourself back in the first part of the algorithm, which now works with a timeout of 10 seconds instead of the required 20. It gets even more complex with conditional changes and when our object is passed to subfunctions or classes via dependency injection. In our example, we go through a series of subscriptions. In the case of PHPMagazin and EntwicklerMagazin, we have to reduce the timeout to a maximum of 10 seconds for special data for various reasons – but this has an unintended effect on subsequent subscription requests, which can demonstrably take between 10 and at most 20 seconds in exceptional cases (Listing 1).

Listing 1

public function fetchSubscriptions(HTTPClient $client): void
{
  foreach ($this->subscriptions as $subscription) {

    $key = $subscription['key'];

    # fetch subscription status with timeout 20...or maybe 10????
    $status = $client->get('https://...', $key);

    # ..........
    # a lot lot more
    # ..........

    if ($key === 'PHPMagazin' || $key === 'EntwicklerMagazin') {
      # set timeout to 10 seconds for our special data
      $client->setTimeout(10);
      # fetch special data
      $data = $client->get('https://....', $key);
    }
  }
}

The example given here is pure fiction, but the underlying problem comes from reality and has occurred in exactly this way. So what is the solution? Quite simply: two separate HTTPClient objects, one for the upper and one for the lower part of the algorithm. In addition, our timeout is not changeable! Each part of the algorithm simply uses the object that is correct for it instead of adapting an existing one.

public function fetchSubscriptions(HTTPClient $statusClient, HTTPClient $dataClient): void;
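
The body could then look like the following sketch – each part of the algorithm simply uses the client that was configured for it, and setTimeout() disappears entirely (the factory wiring of the two clients is assumed to happen elsewhere):

public function fetchSubscriptions(HTTPClient $statusClient, HTTPClient $dataClient): void
{
  foreach ($this->subscriptions as $subscription) {

    $key = $subscription['key'];

    # fetch subscription status - always 20 seconds, guaranteed,
    # because the timeout of this client can never change
    $status = $statusClient->get('https://...', $key);

    if ($key === 'PHPMagazin' || $key === 'EntwicklerMagazin') {
      # fetch special data - always 10 seconds, via a separate,
      # correctly configured instance
      $data = $dataClient->get('https://....', $key);
    }
  }
}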

Just as you can’t change other people in real life the way you would like, you shouldn’t do it with objects in programming. In addition, this example shows us that immutability is not just about properties with primitive data types (Birthday of Person), but also about service classes and other necessary architectural approaches in the areas of reuse, dependency injection, etc. Now that we have integrated the immutability approach, we have a solid and robust base for the layers of our application. It’s time to use that foundation to actually build an application. But what’s the best way to do that? With the “Fail Fast” approach – and we’ll take a closer look at it below.

Fail Fast

To use this method, we must first understand what it actually means. The fail-fast approach says that a system should be designed to detect errors at an early stage and, in the event of a problem, terminate early rather than continue to operate. Of course, you may now wonder what the advantage of terminating a system early is supposed to be: don't we actually want to create stable software that simply doesn't crash? Completely correct!
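At its smallest scale, failing fast is nothing more than a guard clause that terminates early and loudly instead of dragging invalid state through all subsequent layers – a minimal, purely hypothetical sketch:

public function renewSubscription(int $subscriptionId): void
{
  if ($subscriptionId <= 0) {
    # terminate immediately - do not limp on with a broken ID
    throw new InvalidArgumentException('Invalid subscription ID: ' . $subscriptionId);
  }

  # from here on, the algorithm can rely on a valid ID
  # ...
}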

However, simple termination is only a secondary goal. The more important aspect is the resulting algorithm, or rather the path it takes in the event of an error. Especially in the case of problems and exceptions, a process often turns out to be mysterious and anything but linear. One of the reasons for complex and unmanageable processes is that our algorithm is developed by humans. Even though we create many things, it is in the nature of humans to make many mistakes along the way. The first step is to recognize this and adjust accordingly. The species "programmer" tends to double- and triple-check everything in order to paper over even the smallest errors and prevent problems. Mixing dozens of try/catch statements with the well-known !== NULL checks is still a reality that even the author encounters almost daily in his work. But is that a bad thing at all? It's a classic "it works" approach, you might say. And yes, this path also leads to Rome, no question – but it doesn't have to be so bumpy.

With every single statement that we integrate to protect our code from error sources, we create further code that makes our software more complex, bloats it, and could itself cause new errors. A "parachute" against general errors already exists – it is called a unit test, and it is not part of the actual source code. So what if we completely abandoned our safeguards? What if we wrote linear code that we assume will work – and if it doesn't, then it simply should not continue? This may sound utopian, but it is a simple and ingenious approach. Let's take a look at an example: We have a simple monitoring list in which we output our subscriptions. To do this, we iterate over an array of existing IDs, fetch the respective object from the database using our helper function, and display the name (Listing 2). So far, so good.

Listing 2

foreach ($subscriptionIDs as $id) {

  # load subscription of $id from database
  $subscription = $this->getSubscription($id);

  if (!$subscription instanceof Subscription) {
    continue;
  }

  echo "Name: " . $subscription->getName() . PHP_EOL;
}

private function getSubscription(int $id): ?Subscription
{
  try {
    return $this->repository->find($id);
  } catch (EntityNotFoundException $ex) {
    return null;
  }
}

Unfortunately, our developer got a little sloppy with the modeling. The helper function getSubscription($id) returns a nullable object, which looks like a nice approach at first glance. But let's look at what this actually models and defines: An exception marks a state that is faulty or undefined. So for our helper function, according to this design, it is perfectly fine if a subscription is not found by its ID – in that case, null is returned. But this changes the contract of the function. We no longer have a "give me the ID that must exist!" but instead an "I'll try any ID, and if it happens to exist, we'll use it!" Both give the same end result at first glance – but they are completely different approaches in terms of the modeling, meaning, and quality of our function.

So back in our main logic, we now need to check whether we actually received a Subscription object. If not, we simply ignore the fact that our ID list contains an invalid value and move on. The end result of our monitoring list is bloated code that shows only the few subscriptions that were found – and in addition, we never know whether invalid values (possibly due to a programming error) exist in our list of IDs. The customer thanks us. Now let's try another, somewhat tidier approach. Here we dispense with nullable return types and assume that a subscription is found. Providing a correct ID that exists in the system is now the responsibility of the calling class and layer. This means that our helper function is only one line long and can simply be omitted. Thanks to the contract "a subscription comes back, period", we also no longer have to check whether it exists. Instead, we simply output the name – it should be found, after all. If an error does occur somewhere here, an exception should be visible immediately – after all, it is a monitoring list, and it shouldn't be possible for an ID not to exist. Or is it? Do we then have a problem with data consistency? In that case, we would have to act immediately. Perfect – now an exception is thrown:

foreach ($subscriptionIDs as $id) {
 
  # load subscription of $id from database
  $subscription = $this->repository->find($id);
 
  echo "Name: " . $subscription->getName() . PHP_EOL;
}
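
If you would rather keep the helper function for readability, its fail-fast variant is a single line whose signature now states the contract explicitly (a sketch; as in Listing 2, the repository is assumed to throw an EntityNotFoundException for unknown IDs):

# "give me the ID that must exist!" - an unknown ID is an error, not null
private function getSubscription(int $id): Subscription
{
  return $this->repository->find($id);
}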

This somewhat simpler example is intended to illustrate that methodology and style do indeed have an impact on the quality and sustainability of software. In reality, we are not dealing with a single foreach loop in such cases, but with large functions, classes, and subclasses, distributed over several layers. And here it quickly becomes difficult, almost negligent, if well-intentioned safeguards mean that the code is no longer under control in the long run. This brings us back to the beginning of our fail-fast chapter.

Errors are human – so why hide them? No software comes onto the market without errors. Wouldn't it be much more appealing to accept this fact and instead make sure that such errors surface directly, can be found quickly, and are fixed immediately? It is a possibly rigorous and atypical approach, but one that is convincing on closer inspection. But how do we proceed if we want to prevent a potential customer or website visitor from seeing an unintentional error? Does this call the fail-fast principle into question, or can the two be combined? We'll get to that in the third area – exception handling and goal-oriented logging.

Exceptions and Logging

One of the biggest mistakes in the area of exception handling is that people often just dive in without a concept. Again and again we encounter suboptimal, even meaningless exception handling that neither creates logs nor provides any added value for the application. Of course, it is super easy to put a try/catch around a code block without any concept – yes, the author has also been through this phase of personal development. But what exactly is the added value of such an approach? The answer: There is none. With error handling scattered over all layers of the application, you quickly come back to the now familiar problem from the fail-fast approach: amid the complexity of these structures, it is no longer clear where the algorithm begins and where it ends. The modeling of an exception chain and its logging can be considered an art form.

To return to our three-step methodology, we now need to clarify how exceptions should be placed and logged. Since the basic model of our architecture was made as linear as possible using the immutability and fail-fast approaches, we now have a single and simple point of failure – the entry points of our application. Such an entry point can be a controller action or an event subscriber, but also a CLI command of our application. Each of these entry points gets simple, preferably global error handling via try/catch. This gives us the chance to immediately influence the UX/UI chain in case of any unexpected error: mask the error, redirect to an error page, and – very important – log it. If you've been wondering why exceptions and logging are combined into a single point, let me tell you: correct error handling and purposeful logging simply belong together. It makes no sense to create exceptions that are not logged, nor to double- and triple-log all possible errors and warnings just because a logger is used in each catch block for safety. So our try/catch statement in the entry point gets a single clean ERROR log entry – based on the PSR-3 logger interface, of course. This automatically gives us a single point of error logging without complex structures.
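
A sketch of such an entry point, here as a hypothetical controller action – the Response type, the render() method, and the subscription service are placeholders, while the logger is any implementation of the PSR-3 LoggerInterface:

use Psr\Log\LoggerInterface;

class SubscriptionController
{
  private LoggerInterface $logger;

  public function __construct(LoggerInterface $logger)
  {
    $this->logger = $logger;
  }

  public function listAction(): Response
  {
    try {
      # the linear, fail-fast algorithm runs without inner safeguards
      # ($this->subscriptionService is an injected service, wiring omitted)
      $subscriptions = $this->subscriptionService->fetchAll();

      return $this->render('subscriptions.html', ['subscriptions' => $subscriptions]);
    } catch (\Throwable $ex) {
      # the single, clean ERROR log entry for this entry point
      $this->logger->error($ex->getMessage(), ['exception' => $ex]);

      # mask the error and show the visitor a proper error page
      return $this->render('error.html');
    }
  }
}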

Within the catch block we now have the option either to end the flow cleanly and show the website visitor a nice error page, or to set the exit code of our CLI command accordingly so that it still terminates with an error and is not unintentionally seen as OK. In further steps, we can add entries to our log story as necessary. Individual error handling on deeper layers is also allowed. Only: how do we proceed when such error handling is required in one of the deeper layers, without falling back into old problems? In general, with the approaches described here, you should always ask yourself whether it makes sense to catch an error and continue executing the algorithm. Always be aware of side effects and visual ambiguities in the subsequent code. Catching errors in these layers should always be as strictly typed as possible: better an EntityNotFoundException than a global Exception that listens for everything. And as a rule: if an exception is caught and the algorithm is to be terminated according to the fail-fast approach, either the original exception should be thrown again after handling (e.g. a log entry), or a new exception should be created that is given the original as its previous exception. This also preserves the all-important stack trace.
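
In a deeper layer, this could look like the following sketch; SubscriptionImportException is a hypothetical, domain-specific exception:

# hypothetical domain exception for this sketch
class SubscriptionImportException extends \Exception
{
}

try {
  $subscription = $this->repository->find($id);
} catch (EntityNotFoundException $ex) {
  # handle what makes sense at this layer (e.g. a log entry), then wrap
  # and rethrow with the original as "previous" - this preserves the
  # full stack trace
  throw new SubscriptionImportException('Subscription ' . $id . ' could not be loaded', 0, $ex);
}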

Conclusion

Not all programming is the same! If you want to get something out of your application in the long term, you should think carefully about whether it's enough for everything to work in the normal case, or whether it would be advantageous to also think about error cases and debugging phases and to optimize for them in advance. The latter in particular are time wasters that have a negative impact on budget and customer satisfaction. With the combination of these three principles and methodologies you do not automatically write perfectly functioning code, but the often neglected areas off the beaten path benefit from reduced complexity, targeted error handling, and a much easier resolution of bugs and problems.
